    Full Body Acting Rehearsal in a Networked Virtual Environment-A Case Study

    In order to rehearse for a play or a scene from a movie, it is generally required that the actors are physically present at the same time in the same place. In this paper we present an example and experience of a full-body-motion shared virtual environment (SVE) for rehearsal. The system allows actors and directors to meet in an SVE in order to rehearse scenes for a play or a movie, that is, to perform dialogue and blocking (positions, movements, and displacements of actors in the scene) rehearsal through a full body interactive virtual reality (VR) system. The system combines immersive VR rendering techniques and network capabilities with full body tracking. Two actors and a director rehearsed from separate locations. One actor and the director were in London (in separate rooms) while the second actor was in Barcelona. The Barcelona actor used a wide field-of-view, head-tracked head-mounted display and wore a body suit for real-time motion capture and display. The London actor was in a Cave system, with head and partial body tracking. Each actor was presented to the other as an avatar in the shared virtual environment, and the director could see the whole scenario on a desktop display and intervene by voice commands. The director was also represented by a video stream shown in a window within the virtual environment. The London participant was a professional actor, who afterward commented on the utility of the system for acting rehearsal. It was concluded that full body tracking and corresponding real-time display of all the actors' movements would be a critical requirement, and that blocking was possible down to the level of detail of gestures. Details of the implementation and of the actors' and director's experiences are provided.

    Repurpose 2D Character Animations for a VR Environment Using BDH Shape Interpolation.

    Virtual Reality technology has spread rapidly in recent years. However, its growth risks coming to an early end due to the absence of quality content, with only a few exceptions. We present an original framework that allows artists to use 2D characters and animations in a 3D Virtual Reality environment, in order to ease the production of content for the platform. On traditional platforms, 2D animation represents a more economical and immediate alternative to 3D. The challenge in adapting 2D characters to a 3D environment is to interpret the missing depth information: a 2D character is flat, so there is no depth information and every body part lies at the same level as the others. We exploit mesh interpolation, billboarding and parallax scrolling to simulate the depth between each body segment of the character. We have developed a prototype of the system, and extensive tests with a 2D animation production show the effectiveness of our framework.
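
    The following Python sketch is illustrative only, not the authors' BDH interpolation pipeline: it shows the general idea of drawing each body segment on its own billboard quad at a small assumed depth offset and shifting it by a parallax amount tied to the viewer's lateral movement. The segment names, depth offsets, and distances are assumptions made for the example.

    import numpy as np

    # Hypothetical depth offsets (metres) assigned to each body segment; a real
    # pipeline would derive these from the character rig rather than hard-code them.
    SEGMENT_DEPTH = {"torso": 0.00, "head": 0.02, "rear_arm": -0.04, "front_arm": 0.04}

    def billboard_yaw(segment_pos, camera_pos):
        """Yaw rotation (radians) that turns a segment quad to face the camera."""
        to_cam = camera_pos - segment_pos
        return np.arctan2(to_cam[0], to_cam[2])

    def parallax_offset(depth_offset, camera_lateral_shift, reference_distance=2.0):
        """Horizontal shift of a layer proportional to its assumed depth offset.

        Nearer layers (positive offset) move more than farther ones as the viewer
        moves sideways, which is what sells the depth cue between flat segments.
        """
        return camera_lateral_shift * depth_offset / reference_distance

    # Usage: the character stands at the origin; the viewer has moved 0.3 m right.
    camera = np.array([0.3, 1.6, 2.0])
    for name, depth in SEGMENT_DEPTH.items():
        pos = np.array([0.0, 1.0, depth])     # segment anchored on its depth layer
        yaw = billboard_yaw(pos, camera)
        dx = parallax_offset(depth, camera[0])
        print(f"{name}: yaw={np.degrees(yaw):.1f} deg, parallax shift={dx*100:.2f} cm")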

    Acting rehearsal in collaborative multimodal mixed reality environments

    This paper presents the use of our multimodal mixed reality telecommunication system to support remote acting rehearsal. The rehearsals involved two actors, located in London and Barcelona, and a director in another location in London. This triadic audiovisual telecommunication was performed in a spatial and multimodal collaborative mixed reality environment based on the 'destination-visitor' paradigm, which we define and put into use. We detail our heterogeneous system architecture, which spans the three distributed and technologically asymmetric sites, and features a range of capture, display, and transmission technologies. The actors' and director's experiences of rehearsing a scene via the system are then discussed, exploring successes and failures of this heterogeneous form of telecollaboration. Overall, the common spatial frame of reference presented by the system to all parties was highly conducive to theatrical acting and directing, allowing blocking, gross gesture, and unambiguous instruction to be issued. The relative inexpressivity of the actors' embodiments was identified as the central limitation of the telecommunication, meaning that moments relying on performing and reacting to consequential facial expression and subtle gesture were less successful.

    Real-Time Global Illumination for VR Applications

    With you – an experimental end-to-end telepresence system using video-based reconstruction

    We introduce withyou, our telepresence research platform. A systematic explanation of the theory brings together the linked nature of non-verbal communication and how it is influenced by technology. This leads to functional requirements for telepresence, in terms of the balance of visual, spatial and temporal qualities. The first end-to-end description of withyou covers all major processes and the display and capture environment, including two approaches to reconstructing the human form in 3D from live video. An unprecedented characterization of our approach is given in terms of the above qualities and of the influence of the chosen reconstruction approach. This leads to non-functional requirements in terms of the number and placement of cameras and the avoidance of a resulting bottleneck. Proposals are given for improved distribution of processes across networks, computers, and multi-core CPUs and GPUs. A simple conservative estimation shows that both approaches should meet our requirements. One approach is implemented and shown to meet the minimum requirements and to come close to the desirable ones.
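
    As an illustration of the kind of simple conservative estimation the abstract mentions, the Python sketch below totals the raw (uncompressed) bandwidth a multi-camera capture rig would produce and checks it against a link budget with some headroom. The camera count, image size, frame rate, and link budget are hypothetical and are not figures from the withyou paper.

    def raw_camera_bandwidth_mbps(num_cameras, width, height, bytes_per_pixel, fps):
        """Uncompressed video bandwidth produced by the capture rig, in Mbit/s."""
        bits_per_frame = width * height * bytes_per_pixel * 8
        return num_cameras * bits_per_frame * fps / 1e6

    def within_budget(required_mbps, link_budget_mbps, headroom=1.2):
        """Conservative check: require at least 20% headroom over the raw need."""
        return required_mbps * headroom <= link_budget_mbps

    # Hypothetical rig: eight VGA cameras at 30 fps, 16 bits per pixel,
    # feeding a 10 Gbit/s local link before any compression or reconstruction.
    need = raw_camera_bandwidth_mbps(num_cameras=8, width=640, height=480,
                                     bytes_per_pixel=2, fps=30)
    print(f"raw capture bandwidth ~{need:.0f} Mbit/s; "
          f"fits 10 Gbit/s link: {within_budget(need, 10_000)}")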

    Is my hand connected to my body? The impact of body continuity and arm alignment on the virtual hand illusion

    When a rubber hand is placed on a table top in a plausible position as if part of a person's body, and is stroked synchronously with the person's corresponding hidden real hand, an illusion of ownership over the rubber hand can occur (Botvinick and Cohen 1998). A similar result has been found with respect to a virtual hand portrayed in a virtual environment, a virtual hand illusion (Slater et al. 2008). The conditions under which these illusions occur have been the subject of considerable study. Here we exploited the flexibility of virtual reality to examine four contributory factors: visuo-tactile synchrony while stroking the virtual and the real arms, body continuity, alignment between the real and virtual arms, and the distance between them. We carried out three experiments on a total of 32 participants where these factors were varied. The results show that the subjective illusion of ownership over the virtual arm and the time to evoke this illusion are highly dependent on synchronous visuo-tactile stimulation and on connectivity of the virtual arm with the rest of the virtual body. The alignment between the real and virtual arms and the distance between them were less important. It was found that proprioceptive drift was not a sensitive measure of the illusion, but was only related to the distance between the real and virtual arms.
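
    Proprioceptive drift is conventionally measured as the shift in where the participant judges their real hand to be, from before to after stimulation, signed toward the virtual hand. The Python sketch below is a minimal illustration of that computation under assumed one-dimensional positions; it is not the authors' analysis code, and the trial values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class DriftTrial:
        reported_pre_cm: float    # judged real-hand position before stimulation
        reported_post_cm: float   # judged real-hand position after stimulation
        virtual_hand_cm: float    # virtual-hand position on the same axis

    def proprioceptive_drift(trial: DriftTrial) -> float:
        """Positive values mean the judgement moved toward the virtual hand."""
        toward_virtual = 1.0 if trial.virtual_hand_cm >= trial.reported_pre_cm else -1.0
        return toward_virtual * (trial.reported_post_cm - trial.reported_pre_cm)

    # Hypothetical trial: virtual hand placed 15 cm to the right of the real hand.
    trial = DriftTrial(reported_pre_cm=0.0, reported_post_cm=2.5, virtual_hand_cm=15.0)
    print(f"drift = {proprioceptive_drift(trial):.1f} cm")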

    Beaming into the Rat World: Enabling Real-Time Interaction between Rat and Human Each at Their Own Scale

    Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene, where they can experience and interact with events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination, embodied as a robotic device, where participants typically have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO) and, simultaneously, the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat at human scale, and the rat interacting with the human at rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of, and interaction with, animals but at human scale.
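
    The core of the coupling described above is a bidirectional change of scale: the tracked human's movements are mapped down into the rat arena to drive the small robot, and the tracked rat's movements are mapped up to place the humanoid avatar in the human-scale virtual scene. The Python sketch below illustrates that mapping only; the arena and room dimensions are assumptions, not the published system's values.

    import numpy as np

    ARENA_SIZE_M = 1.2    # hypothetical rat-arena side length
    ROOM_SIZE_M = 4.8     # hypothetical tracked human-space side length
    SCALE = ARENA_SIZE_M / ROOM_SIZE_M    # human space -> rat arena

    def human_to_robot(human_xy_m):
        """Map the tracked human's floor position to a target for the arena robot."""
        return np.asarray(human_xy_m) * SCALE

    def rat_to_avatar(rat_xy_m):
        """Map the tracked rat's arena position to the humanoid avatar's position."""
        return np.asarray(rat_xy_m) / SCALE

    # Usage: the human steps to (1.0, 2.0) m; the rat is tracked at (0.3, 0.9) m.
    print("robot target (m):", human_to_robot([1.0, 2.0]))
    print("avatar position (m):", rat_to_avatar([0.3, 0.9]))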